7 research outputs found
Temporal Segmentation of Surgical Sub-tasks through Deep Learning with Multiple Data Sources
Many tasks in robot-assisted surgery (RAS) can be represented by finite-state machines (FSMs), where each state represents either an action (such as picking up a needle) or an observation (such as bleeding). A crucial step towards the automation of such surgical tasks is the temporal perception of the current surgical scene, which requires real-time estimation of the states in the FSMs. The objective of this work is to estimate the current state of the surgical task based on the actions performed or events that occur as the task progresses. We propose Fusion-KVE, a unified surgical state estimation model that incorporates multiple data sources, including Kinematics, Vision, and system Events. Additionally, we examine the strengths and weaknesses of different state estimation models in segmenting states with different representative features or levels of granularity. We evaluate our model on the JHU-ISI Gesture and Skill Assessment Working Set (JIGSAWS), as well as a more complex dataset involving robotic intra-operative ultrasound (RIOUS) imaging, created using the da Vinci® Xi surgical system. Our model achieves a superior frame-wise state estimation accuracy of up to 89.4%, improving on state-of-the-art surgical state estimation models on both the JIGSAWS suturing dataset and our RIOUS dataset.
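The FSM framing above can be sketched in code. The following is a minimal illustration, not the paper's Fusion-KVE model: a hypothetical suturing subtask as a small state graph, with a toy frame-wise estimator that fuses per-state scores from multiple data streams (e.g., kinematics and vision) by averaging. State names, scores, and the fusion rule are all assumptions for illustration.

```python
# Illustrative sketch only: a surgical subtask as a finite-state machine,
# plus a toy frame-wise state estimator that fuses per-source scores.
# State names and the averaging fusion rule are hypothetical.

SUTURING_FSM = {
    "reach_needle":  {"grasp_needle"},
    "grasp_needle":  {"insert_needle"},
    "insert_needle": {"pull_suture"},
    "pull_suture":   {"reach_needle"},   # loop back for the next stitch
}

def fuse_scores(per_source_scores):
    """Average per-state scores across data sources (kinematics, vision, ...)."""
    states = per_source_scores[0].keys()
    n = len(per_source_scores)
    return {s: sum(src[s] for src in per_source_scores) / n for s in states}

def estimate_state(prev_state, per_source_scores, fsm):
    """Pick the highest-scoring state reachable from the previous state."""
    fused = fuse_scores(per_source_scores)
    candidates = fsm[prev_state] | {prev_state}   # allow staying in a state
    return max(candidates, key=lambda s: fused.get(s, 0.0))
```

A transition is accepted only if the FSM allows it, which is one simple way such a state graph can constrain a noisy frame-wise classifier.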
Haptic Texture Rendering and Perception Using Coil Array Magnetic Levitation Haptic Interface: Effects of Torque Feedback and Probe Type on Roughness Perception
M.S. thesis, University of Hawaii at Manoa, 2016. Includes bibliographical references. A novel maglev-based haptic platform was deployed to investigate the effects of torque feedback and probe type on human roughness perception. For this purpose, two haptic probes, a fingertip and a penhandle, were 3D printed with one and four embedded magnets, respectively.
Three torque renderings, namely No Torque, Slope Torque, and Stiff Torque, were developed in tandem with penetration-based force feedback to render simulated surfaces. The main difference between these conditions was the amount and type of active torque generated. A conventional magnitude estimation experiment was performed for data gathering and analysis.
The results of the experiment showed strong effects of surface wavelength across all torque conditions and probes. Participants rated surfaces as rougher under Slope Torque and with the fingertip probe compared to the penhandle. These results reveal a new means of torque-based surface generation that leads to higher roughness perception. The outcomes also highlight the importance of probe type in human roughness perception.
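The penetration-based force feedback with a slope-derived torque can be illustrated with a small sketch. This is not the thesis's rendering code: the sinusoidal surface model and the gains `k_f` and `k_t` are assumptions chosen for illustration.

```python
import math

# Hedged sketch: penetration-based force feedback over a sinusoidal
# texture, with a "Slope Torque"-style term that tilts the probe toward
# the local surface gradient. Surface model and gains are illustrative.

def surface_height(x, amplitude, wavelength):
    return amplitude * math.sin(2 * math.pi * x / wavelength)

def surface_slope(x, amplitude, wavelength):
    return amplitude * (2 * math.pi / wavelength) * math.cos(2 * math.pi * x / wavelength)

def render_feedback(x, z, amplitude, wavelength, k_f=500.0, k_t=0.2):
    """Return (normal_force, torque); both zero when the probe is above the surface."""
    depth = surface_height(x, amplitude, wavelength) - z
    if depth <= 0.0:
        return 0.0, 0.0                      # no contact, no feedback
    force = k_f * depth                      # penalty-based normal force
    torque = -k_t * math.atan(surface_slope(x, amplitude, wavelength))
    return force, torque
```

Under this kind of model, shorter wavelengths produce steeper slopes and hence larger torque excursions per traverse, which is one plausible mechanism for the wavelength effects reported above.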
Towards Building Autonomy and Intelligence for Surgical Robotic Systems using Trajectory Optimization, Stochastic Estimation, Vision-based Control, and Machine Learning Algorithms
While the teleoperation framework has been successfully implemented for surgical robots, especially for soft tissue interventions, its main challenge is that the surgeons are responsible for all actions taken, and all decisions made, during the entire surgery. As robotics technology moves toward more complicated surgical interventions, the teleoperation framework can become increasingly overwhelming for human surgeons, who have limited sensing and motor control bandwidth, and can result in degraded surgical performance. Introducing automation and intelligence into robot-assisted interventions, where some of the surgical responsibilities are delegated to an AI agent, can substantially improve this framework and enhance the overall surgical outcome. Amongst the many challenges of bringing autonomy into surgical interventions, the two main technological ones pertain to the complexity of soft tissue environments and the inaccuracies of surgical robotic systems. This dissertation aims at addressing these two challenges and proposes various solutions for different surgical robotic systems, with applications to laparoscopic, orthopedic, and ophthalmologic surgeries. Regarding the planning of surgical subtasks, suturing and tissue manipulation, which occur frequently in soft tissue surgeries, are considered. For the suturing task, two novel optimization-based needle motion planning algorithms, Fixed Center Motion (FCM) and Moving Center Motion (MCM), are proposed, in which tissue trauma is minimized and a wide variety of suturing criteria (e.g., adequate depth) are met. Extensive simulations for each method were provided to (I) confirm the mathematical formulations and (II) obtain optimal strategies under various suturing conditions. The FCM needle planner was deployed on the Raven IV system with an open-loop controller (i.e., no vision feedback), and experimental results confirmed the simulation and optimization outcomes.
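The geometry behind a fixed-center needle motion can be sketched briefly. This is a hypothetical illustration, not the dissertation's planner: under a Fixed Center Motion, a circular needle is rotated about its own center, so the tip traces a circular arc through tissue; the function below merely samples that arc.

```python
import math

# Hypothetical sketch: sample the tip trajectory of a circular needle
# rotated about a fixed center (FCM-style motion). Radius, center, and
# angle sweep are illustrative parameters.

def fcm_tip_path(center, radius, theta_start, theta_end, n=20):
    """Sample n tip positions along the arc swept by rotating the needle."""
    pts = []
    for i in range(n):
        t = theta_start + (theta_end - theta_start) * i / (n - 1)
        pts.append((center[0] + radius * math.cos(t),
                    center[1] + radius * math.sin(t)))
    return pts
```

The actual FCM/MCM algorithms optimize over such motions subject to suturing criteria; this sketch only shows the kinematic constraint that every tip sample stays at a fixed distance from the rotation center.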
Regarding the tissue manipulation task, a new synergic learning method is proposed in which human knowledge contributes to selecting intuitive features of tissue manipulation while the algorithm learns to take optimal actions. The method was tested on four different configurations in simulation, and the robot was able to accomplish the tissue manipulation task autonomously in all of them. To improve the estimation and control accuracy of three surgical robotic systems, multiple frameworks are proposed. For the first category, which pertains to cable-driven serial manipulators used in soft tissue surgeries, a 6-DoF visual servo controller using robot-camera calibration and real-time vision feedback was developed. The framework enabled the Raven IV surgical system to perform autonomous suturing for various suturing trajectories and tissue compliances. For the second category, which pertains to continuum manipulators with applications to orthopedic surgery and bronchoscopy, a novel stochastic sensor fusion algorithm, called Simultaneous Sensor Calibration and Deformation Estimation (SCADE), was introduced. SCADE addresses the problem of simultaneously estimating, in real time, the calibration bias of FBG sensors and the shape/tip of continuum surgical manipulators. The algorithm was tested on estimating the tip position of a continuum manipulator in free and obstructed environments and showed superior performance compared to estimates from the FBG sensors alone. For the third category, which pertains to a robot-assisted cataract surgery system, a new hardware and software solution was proposed to estimate the tip location of surgical tools inside the eye during cataract surgery. The framework was developed and tested using a total of 31 pig eyes, and the results demonstrated the efficacy of the proposed solution.
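The core idea behind jointly estimating a sensor's calibration bias and the quantity it measures can be illustrated with a toy augmented-state Kalman filter. This is not SCADE itself: the 1-D state, the random-walk model, and all covariances below are assumptions for illustration. The state is augmented with the unknown bias b, and biased readings (y = x + b) are combined with occasional unbiased reference measurements to make both components observable.

```python
import numpy as np

# Toy sketch (not the SCADE algorithm): estimate a tip coordinate x and a
# constant sensor bias b jointly, using a Kalman filter over the augmented
# state [x, b]. Measurement models and covariances are illustrative.

def kalman_step(state, P, y, H, R, Q):
    """One predict/update step for a random-walk model (F = I)."""
    P = P + Q                                  # predict: state unchanged, covariance grows
    S = H @ P @ H.T + R                        # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)             # Kalman gain
    state = state + K @ (y - H @ state)        # correct with the measurement
    P = (np.eye(len(state)) - K @ H) @ P
    return state, P
```

Alternating biased and unbiased measurements drives the estimate of [x, b] toward the true values; with only the biased sensor, x and b would be indistinguishable.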